Improving Age Invariant Face Recognition System Using Facial Features

 

A. Surendar

Assistant Professor, School of Electronics, Vignan’s University, Andhra Pradesh

*Corresponding Author E-mail:  

 

ABSTRACT:

An automatic face recognition system faces several difficult problems when matching faces of different ages. Most face recognition work has addressed the aging variation problem through age simulation or age estimation. Facial aging is a complex process that affects both the texture and the shape of the face, and these texture and shape changes degrade the performance of automatic face recognition. The problem is to design an effective matching framework for face recognition across images of different ages. This paper develops a model for an automatic age invariant face recognition system using facial features. In this model, the facial features, namely the eyes, mouth, and nose, are extracted from the given image using an illumination-based method. Once the eyes, nose, and mouth are detected, the face is aligned based on the angle between the eye and mouth coordinates. After alignment, the eigenvalues and eigenvectors of the covariance matrix are calculated, as in the eigenfaces approach, and the differences between all the features are computed to obtain a difference vector. These difference vectors are compared across all faces to find the best match, i.e., the minimum mismatch value, and the face with the best match (minimum mismatch value) is given as the output. Experimental results on the FG-NET face aging data set show that this method outperforms existing approaches. Fusing the facial-feature model into age invariant face recognition further improves the accuracy of face matching in the presence of aging.

 

KEYWORDS:  Face recognition, age invariant, extraction, face alignment, PCA, FG-NET.

 

 

 


INTRODUCTION:

Face recognition is one type of biometrics-based authentication system and one of the most popular areas of computer vision research; it is among the most successful applications of image analysis and understanding. Face recognition has attracted attention for two reasons: its wide range of applications in law enforcement, and the availability of feasible technology. Figure 1 shows a face recognition system. The input of the system is always an image or video stream, and the output is the verification of the subjects that appear in the image or video. The system is a three-step process in which face detection and feature extraction run simultaneously.

 

 

Face detection is the process of extracting faces from scenes, in which the system identifies a certain image area as a face. Feature extraction obtains the relevant facial features from the data. In the last stage, the system recognizes the face. Face recognition is used in many applications such as information security, smart cards, surveillance, and law enforcement. Humans recognize faces easily; machines are now learning and improving at this task.

Figure 1 A Face Recognition System.

 

RELATED WORKS:

Face recognition is an important area of image processing, and automatic face recognition is one of the most important and challenging applications of biometrics. The main challenges in face recognition are pose, illumination, expression, and aging, all of which are commonly encountered in practice. At an early stage, the first semi-automated face recognition system was developed, which located features such as the eyes, ears, mouth, and nose in images. It was very slow, and a human operator had to select some face coordinates before the system computed the information used for recognition. In [2], an automatic face recognition system was developed as the next step; it showed how human faces can be identified using 21 specific subjective markers such as hair colour and lip thickness. The main problem of these early-stage systems is that the computations were done manually. In [3], face recognition was further improved with new techniques based on eigenfaces, which recognize human faces and made a reliable, real-time automated face recognition system possible.

 

Most face recognition systems face many problems: pose variations, illumination, and expression. Most systems recognize only frontal faces; side-view face images are very difficult to recognize with existing techniques. The accuracy of face recognition is usually limited by large intra-subject variations such as expression, lighting, pose, and age [11]. Face recognition work has mainly focused on compensating for these variations, which degrade recognition performance. Compared with other variations such as pose, expression, and illumination, age invariance has received increasing attention in automatic face recognition. Figure 2 shows some of the subject variations, such as pose, expression, illumination, and aging, that are commonly encountered in face recognition. Designing an appropriate age invariant face recognition system is necessary in many applications, particularly for checking whether a person has already been issued government documents such as passports and driver licenses [8], [9].

 

Figure 2. Example images (at ages 9, 11, 12, 14, and 16) showing some subject variations from the FG-NET database [4].

In [6], the authors studied how age differences affect the performance of face recognition in a real passport verification setting. Their experimental results show that aging does increase the difficulty of recognition, but its effect does not exceed that of expression or illumination. Studies on face verification across age [9] have shown that simulating texture and shape variations is a challenging task. Published approaches to age invariant face recognition are limited, because most algorithms address the facial aging problem through aging simulation [1], [13], [14] or age estimation [7], [15]. One successful approach to face recognition across age is to build a 2-D or 3-D generative model [1], [14], [15] that compensates for the aging process in age estimation or face matching; these methods first compensate for the age effect before comparing the probe with the gallery image. This generative model has some limitations for age invariant face recognition [16]. First, face models are difficult to construct and the aging process is not well understood, particularly when the number of training samples is limited; moreover, the facial aging process is very complex, so strong parametric assumptions are needed. Second, constructing the age invariant model requires additional information, such as the landmark point locations and true ages of the face images, and the images should be captured under controlled conditions such as normal illumination, frontal pose, and natural expression. These constraints are not easy to satisfy in practice.

 

To overcome these problems, a discriminative aging model [5], [6] has been used for face representation, combined with gradient orientation pyramids and a support vector machine for verifying faces across ages. Using the PCA and EBGM algorithms on a data set, the relationship between recognition accuracy and age gap was reported, showing some improvement in matching based on gender, race, height, and weight. A discriminative model addresses age variations and also handles intra-user variations such as pose, expression, and illumination. It differs from other models in both classification and feature representation. Because it is difficult to characterize the entire face image with a single image descriptor, a discriminative model uses patch-based local feature representations such as the Scale Invariant Feature Transform (SIFT) [3] and Multi-scale Local Binary Patterns (MLBP) [6]. The discriminative model reduces the dimensionality of the image using multi-feature discriminant analysis (MFDA). This discriminative approach matches face images of the same person at different ages.

 

PROPOSED SYSTEM:

The proposed age invariant face recognition model consists of the following processes: eye extraction, mouth and nose extraction, face alignment, and matching. These are described in the following subsections.

 

A. Detecting Eye:

The proposed system first locates and extracts the eyes in the given input image. An illumination-based method is favoured for locating and extracting the eyes in facial images. It is based on the observation that, compared to other parts of the face, the eye regions have a high density of edges, dark colour, and low illumination. The first step of this method is to extract the colour, edge density, and illumination information separately from the given facial image. Some regions are removed based on rules about the shape of the eyes, and the remaining connected regions are combined to determine the eye regions; the shape-based rules are then applied to remove the remaining false eye regions. The illumination-based method builds two separate eye maps, one from the luminance (luma) component of the image and the other from the chrominance (chroma) components. The eye map from chroma exploits the low Cr and high Cb values around the eyes and is calculated from:

 

EyeMapC = (1/3) · [ Cb² + (C̃r)² + Cb/Cr ]                     (1)

 

where Cb, Cr, and C̃r = 255 − Cr are the normalised blue chroma, red chroma, and negative red chroma components, respectively. To construct the eye map from the luma component, morphological operations such as erosion and dilation are performed; these emphasise the brighter and darker pixels of the eye regions in the luma component. The luma component of the eye map is calculated from:

 

EyeMapL = ( L(x, y) ⊕ S(x, y) ) / ( L(x, y) ⊖ S(x, y) + 1 )                    (2)

 

where L(x, y) is the luma component of the image and S(x, y) is the structuring element. The symbols ⊖ and ⊕ represent morphological erosion and dilation, respectively. The chroma and luma eye maps are combined into a single map to obtain the final result of eye detection.
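The two eye maps above can be sketched in NumPy as follows. This is a minimal illustration of Eqs. (1)–(2), assuming YCbCr planes with values in [0, 255]; the square structuring element, the normalisation of the chroma terms, and the small offsets guarding against division by zero are implementation choices, not specified by the paper.

```python
import numpy as np

def _dilate(img, k):
    # Greyscale dilation with a k x k square structuring element (assumed shape).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def _erode(img, k):
    # Greyscale erosion with a k x k square structuring element.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.full_like(img, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def eye_map(cb, cr, luma, k=5):
    cb, cr, luma = (a.astype(np.float64) for a in (cb, cr, luma))
    cr_neg = 255.0 - cr                     # negative of the red chroma component
    # Eq. (1): chroma eye map, bright where Cb is high and Cr is low.
    eye_c = ((cb / 255.0) ** 2 + (cr_neg / 255.0) ** 2 + cb / (cr + 1.0)) / 3.0
    # Eq. (2): luma eye map, dilation over erosion (+1 avoids division by zero).
    eye_l = _dilate(luma, k) / (_erode(luma, k) + 1.0)
    # Combine the chroma and luma maps into the final eye map.
    return eye_c * eye_l
```

In a full system the combined map would be thresholded and filtered by the shape-based rules described above.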

 

B. Detecting Mouth:

To locate the mouth region, pixels with a weaker blue component and a stronger red component (Cr > Cb) must be found to construct the mouth map:

 

MouthMap = Cr² · ( Cr² − η · Cr/Cb )²                                       (3)

η = 0.95 · ( (1/p) Σ Cr² ) / ( (1/p) Σ (Cr/Cb) )                            (4)

where p is the number of pixels in the given face image. The mouth region is located slightly above the global minimum of the threshold. Some errors occur due to other reddish skin parts, and different low-level image processing methods are applied to resolve this problem. Once the eyes are located, the system checks whether they lie on a horizontal line; if not, it finds the angle of their deviation. The mouth is aligned in a similar way. A structuring element is therefore defined as a line oriented at that angle, with a length proportional to the face width. Points smaller than the structuring element are deleted using erosion, and the line is aligned using dilation. Finally, the mouth and eye regions are extracted.
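The mouth map of Eqs. (3)–(4) can be sketched as below. This is an assumed NumPy rendering: the [0, 1] normalisation of the chroma planes and the small epsilon guarding the Cr/Cb ratio are implementation choices.

```python
import numpy as np

def mouth_map(cb, cr):
    # Assumed YCbCr chroma planes in [0, 255], normalised to [0, 1] here.
    cr = cr.astype(np.float64) / 255.0
    cb = cb.astype(np.float64) / 255.0
    ratio = cr / (cb + 1e-6)          # Cr/Cb, guarded against division by zero
    p = cr.size                        # number of face pixels (Eq. 4)
    eta = 0.95 * ((cr ** 2).sum() / p) / (ratio.sum() / p)
    # Eq. (3): response is large where Cr^2 dominates eta * Cr/Cb.
    return (cr ** 2) * (cr ** 2 - eta * ratio) ** 2
```

The strongest responses of this map mark candidate mouth pixels, which the shape rules above then refine.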

 

C. Detecting Nose:

The eye and mouth sections of the human face are extracted by taking the lower and upper coordinates of their edges. After extraction of the mouth and eyes, the section between the upper coordinates of the mouth and the lower coordinates of the eyes is demarcated as the nose. That region is then binarised: pixels whose intensity exceeds the threshold become white, and the rest become black.
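The nose demarcation and binarisation step above can be sketched as follows; the row-index arguments and the default threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nose_region(gray, eye_bottom, mouth_top, thresh=100):
    # Crop the horizontal band between the eyes' lower coordinate and the
    # mouth's upper coordinate (rows eye_bottom..mouth_top-1).
    band = gray[eye_bottom:mouth_top, :]
    # Binarise: white (255) where intensity exceeds the threshold, black (0) elsewhere.
    return np.where(band > thresh, 255, 0).astype(np.uint8)
```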

 

D. Face Alignment:

Once the mouth and eye maps are determined, the face must be aligned if it is inclined, i.e., the line through the eyes must be made parallel to the horizontal axis and the nose parallel to the vertical axis. For this, the coordinates of the midpoint between the two eye centers and the coordinates of the center of the lower lip are computed from the eye map. The angle between the line joining these two points and the vertical axis through the middle of the face is calculated as follows:

 

α = arctan( Δx / Δy )                                                       (5)

 

Where,

 

Δx = x_m − x_e                                                              (6)

Δy = y_m − y_e                                                              (7)

in which (x_e, y_e) is the midpoint between the two eye centers and (x_m, y_m) is the center of the lower lip.

 

Using the angle α, the face is aligned.
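The angle computation of Eqs. (5)–(7) can be sketched as below, assuming the eye and mouth centers are given as (x, y) points; the function and parameter names are illustrative.

```python
import numpy as np

def alignment_angle(eye_left, eye_right, mouth):
    ex = (eye_left[0] + eye_right[0]) / 2.0   # eye-pair midpoint x
    ey = (eye_left[1] + eye_right[1]) / 2.0   # eye-pair midpoint y
    dx = mouth[0] - ex                         # horizontal offset, Eq. (6)
    dy = mouth[1] - ey                         # vertical offset, Eq. (7)
    # Eq. (5): deviation of the eye-to-mouth line from the vertical face axis,
    # in degrees; 0 means the face is already upright.
    return np.degrees(np.arctan2(dx, dy))
```

Rotating the image by −α (e.g. with any image-rotation routine) then makes the eye line horizontal.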

 

E. Face Recognition:

Once the face is aligned, average values are calculated for feature pairs such as the distance between the eye centers and the lip center, and the distance between the nose and the eye centers. For any pair of face images, the differences between all the features are computed to obtain a difference vector. Using these difference vectors, the minimum mismatch value is found by comparing the probe against all the faces. Finally, the face with the minimum mismatch value is given as the output.
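The matching step above can be sketched as follows. This is an assumed minimal implementation: each face is reduced to a vector of feature distances (e.g. eye-to-lip and eye-to-nose), and the L1 sum of the difference vector is used as the mismatch score; the exact distance measure is not specified in the paper.

```python
import numpy as np

def best_match(probe, gallery):
    # probe: feature-distance vector of the query face.
    # gallery: list of feature-distance vectors of enrolled faces.
    probe = np.asarray(probe, dtype=np.float64)
    # Difference vector against each enrolled face, summed as an L1 mismatch.
    mismatch = [np.abs(probe - np.asarray(g, dtype=np.float64)).sum()
                for g in gallery]
    # Return the index of the face with the minimum mismatch value.
    return int(np.argmin(mismatch))
```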

RESULTS AND DISCUSSION:

A. Database:

The FG-NET database is a well-known public-domain benchmark for evaluating facial aging models. It contains 1002 face images of 82 subjects at different ages, with the maximum age being 69 and the minimum age being 0. The proposed system uses the complete FG-NET database.

 

B. Experiments on the FG-NET database:

This section reports experiments on a large, publicly available aging data set. Several public-domain face data sets exist, such as AR, FERET, and XM2VTS, but only a few are used for, or constructed around, the aging problem. The desired attributes of a face aging database are a large number of subjects and images, with the images showing few other variations in illumination, pose, and expression. Table 1 gives the details of the public-domain facial aging database FG-NET.

 

Table 1 Public Domain Facial Aging Database

Database | #Subjects | Male | Female | Total #Images | Age Range
FG-NET   |    82     |  48  |   34   |     1002      |   0-69

 

To evaluate eye detection performance, the proposed method is compared with other methods such as the colour-based method and the edge-density method, and it outperforms both. The colour-based method assumes that the darkest regions of the face are the eyes, so other dark regions are also extracted; it therefore performs worse than the illumination-based method. The edge-density method fails when the input image is poorly illuminated or the eyes are closed. Comparing these methods on the FG-NET database, the illumination-based method performs best.

 

 

Figure 3 (a) Original image, (b) Chrominance map, (c) Luminance map, and (d) Eye map.

 

Figure 3 shows the result of the eye map on the FG-NET database: the chrominance map and luminance map of the eyes are shown in Figures 3(b) and 3(c), and these two maps are combined to give the output eye map of the input image in Figure 3(d). Figure 4 shows the result of the mouth map on the FG-NET database.

 

Figure 4 Mouth map.

 

The proposed face alignment method outperforms existing methods such as 3-D aging simulation. Existing systems need strong assumptions to align the face, whereas the proposed method simply finds the angle between the two points and then aligns the face. Figure 5 shows the original, inclined position of the face (a) and the face aligned after finding the angle (b). Compared to existing methods, the proposed method works effectively.

 

Figure 5 (a) Original image and (b) Aligned image.

 

Figure 6 (a) Input image, (b) Recognized image, and (c) Histogram equalization.

 

In the matching process, compared with existing techniques, the proposed system uses the facial features; existing systems do not concentrate on facial features such as the eyes, nose, and mouth. Compared with existing systems, the proposed system performs better in the matching process on the FG-NET database. Figure 6 shows the final output of the proposed system and the histogram of the image: the input image is at age 8 and the recognized output image is at age 16. Comparing the input image and the recognized image, the peak signal-to-noise ratio is 20.763 dB; a high peak signal-to-noise ratio means the matching accuracy is very good.
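The peak signal-to-noise ratio used to score the match above can be computed with the standard formula, sketched here for 8-bit images (peak value 255); the function name is illustrative.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # PSNR between two same-sized images, in decibels.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)              # mean squared error
    if mse == 0.0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)  # higher means closer agreement
```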

 

CONCLUSION:

A model for an age invariant face recognition system using facial features has been proposed. The proposed approach focuses on the face aging problem; most face recognition systems do not deal with aging, only with pose, illumination, and expression. The proposed model first finds and extracts the facial features, namely the eyes, mouth, and nose, using the illumination-based eye map and mouth map. To handle an inclined face, the face is aligned using the extracted facial features, and the distance vectors of these features are calculated. Comparing these distance vectors across all faces, the face with the minimum mismatch (maximum match) value is given as the output. Experiments were conducted on the FG-NET database, and the proposed system is more accurate than other existing methods. Facial aging remains a challenging task, and further work is required to improve the fusion framework for the aging model and the accuracy of the matching performance.

 

REFERENCES:

1.       A. Lanitis, C. Taylor, and T. Cootes, “Toward automatic simulation of aging effects on face images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 442–455, Apr. 2002.

2.       A. J. Goldstein, L. D. Harmon, and A. B. Lesk, “Identification of human faces,” Proc. IEEE, vol. 59, no. 5, pp. 748–760, May 1971.

3.       D. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.

4.       FG-NET Aging Database [Online]. Available: http://www.fgnet.rsunit.com/

5.       H. Ling, S. Soatto, N. Ramanathan, and D. Jacobs, “A Study of Face Recognition as People Age,” Proc. IEEE Int’l Conf. Computer Vision, pp. 1-8, 2007.

6.       H. Ling, S. Soatto, N. Ramanathan, and D. Jacobs, “Face verification across age progression using discriminative methods,” IEEE Trans. Inf. Forensic Security, vol. 5, no. 1, pp. 82–91, Mar. 2010.

7.       M. Albert, K. Ricanek, and E. Patterson, “A review of the literature on the aging adult skull and face: Implications for forensic science research and applications,” Forensic Science Int., vol. 172, no. 1, pp. 1–9, Oct. 2007.

8.       N. Ramanathan and R. Chellappa, “Computational Methods for modelling facial aging: A survey,” J. Vis. Languages Comput., vol. 20, no.3, pp. 131–144, Jun. 2009.

9.       N. Ramanathan and R. Chellappa, “Face Verification Across Age Progression,” Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 462-469, 2005.

10.     N. Ramanathan and R. Chellappa, “Face verification across age progression,” IEEE Trans. Image Process., vol. 15, no. 11, pp. 3349–3361, Nov. 2006.

11.     P.J. Phillips, W.T. Scruggs, A.J. O’Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, and M. Sharpe, “FRVT 2006 and ICE 2006 Large-Scale Results,” Technical Report NISTIR 7408, Nat’l Inst. of Standards and Technology, Mar. 2007.

12.     S. Zhou, B. Georgescu, X. Zhou, and D. Comaniciu, “Image based regression using boosting method,” in Proc. IEEE Int. Conf. Comput. Vis., 2005, vol. 1, pp. 541–548.

13.     T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.

14.     U. Park, Y. Tong, and A. K. Jain, “Age invariant face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 5, pp. 947–954, May 2010.

15.     X. Geng, Z. Zhou, and K. Smith-Miles, “Automatic age estimation based on facial aging patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 12, pp. 2234–2240, Dec. 2007.

16.     Z. Li, U. Park, and A. K. Jain, “A Discriminative Model for Age Invariant Face Recognition,” IEEE Trans. Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, Sep. 2011.

 

 

 

 

 

 

Received on 19.04.2017          Modified on 15.05.2017

Accepted on 23.05.2017        © RJPT All right reserved

Research J. Pharm. and Tech. 2017; 10(6): 1762-1766.

DOI: 10.5958/0974-360X.2017.00311.0